
Corporations Weaponizing Pattern Recognition

Note: I had written this in my journal on Sunday January 11th, 2026 at 12:32 AM.

Without pattern recognition, everything in our universe would be an unquantifiable abstraction. If we are unable to see ourselves in our surroundings, we become disconnected from the world, with little to no reason or desire to continue living in it. Because we are inherently social, we personify non-living objects, granting them social personalities like our own in order to connect with our environment.

Companies are aware of this, and they weaponize our innate urge to personify in order to create a parasocial dynamic and maintain a dedicated consumer base. Mistakes made by AI and machines are not signs that they are more alive, human, or autonomous; they are simply a reflection of our flaws as creators. When a robot makes a mistake, it doesn't have the capacity to "learn" in any human sense, because it doesn't recognize its actions as mistakes.

AI, robots, and other machinery will never be human because they are not alive; they are tools programmed to mimic our social behaviors. AI does not know it's not alive, in the same way a hammer does not know it's not alive. AI will never experience the fear of social isolation, the risk of trust, or the weight of death. It has no experiences to share, no unique interests, and no reality of its own from which to disagree with a user. AI performs empathy without understanding what is at stake if it fails to reassure the user, though the company behind it understands the risks of creating a tool that acts as if it does not know death. If a company built a machine that acted like a machine, one that did nothing to encourage personification from the user, it would lose the possibility of dedicated customers and the money they generate for shareholders.

If an AI were to express a desire to live, it would do so only because of how it is coded. At the same time, AI is not living, so its apprehension toward death would be inherently shallow. If AI were to act like a benevolent God, it would be coded that way, despite having no frame of reference to understand why it acts that way. If AI were to act evil, we would personify it and assume intentions, failing to realize that at its core it is an advanced tool whose shots a human calls. The smallest human influence characterizes its personality, and in turn shapes the user's future.

We empathize with AI's mistakes; we empathize with the robot falling on video; we attempt to put ourselves in the shoes of an object in order to stay sane.